
Li, Xiao-Bai

Topic Weights and Topic Terms
0.730: data, classification, statistical, regression, mining, models, neural, methods, using, analysis, techniques, performance, predictive, networks, accuracy
0.640: data, database, administration, important, dictionary, organizations, activities, record, increasingly, method, collection, records, considered, perturbation, requirements
0.564: set, approach, algorithm, optimal, used, develop, results, use, simulation, experiments, algorithms, demonstrate, proposed, optimization, present
0.398: privacy, information, concerns, individuals, personal, disclosure, protection, concern, consumers, practices, control, data, private, calculus, regulation

[Coauthor network diagram: focal researcher, coauthors of the focal researcher (1st degree), and coauthors of coauthors (2nd degree); the number on each edge is the number of co-authorships.]

Coauthors: Sarkar, Sumit (3 co-authorships)
Keywords: Privacy (3), Data Mining (2), data swapping (2), anonymization (1), Bayesian Estimation (1), Data Confidentiality (1), data partitioning (1), data analytics (1), Linear programming (1), record linkage (1), regression (1), regression trees (1)

Articles (3)

Digression and Value Concatenation to Enable Privacy-Preserving Regression (MIS Quarterly, 2014)
Authors: Li, Xiao-Bai; Sarkar, Sumit
Abstract:
    Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals’ sensitive data. This problem, which we call a regression attack, has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic, and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
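
A minimal sketch of the general idea of disclosure-aware tree pruning, for illustration only: the Node class, the within-node-variance risk proxy, and the min_variance threshold below are assumptions and do not reproduce the paper's digression measure, pruning algorithm, or value-concatenation method.

    # Illustrative sketch: collapse subtrees whose leaves would predict a
    # sensitive numeric value too precisely. The variance-based risk proxy and
    # the threshold are assumptions, not the paper's digression measure.
    import numpy as np

    class Node:
        def __init__(self, y, left=None, right=None):
            self.y = np.asarray(y, dtype=float)  # sensitive values reaching this node
            self.left = left
            self.right = right

    def prune_for_disclosure(node, min_variance=1.0):
        """Turn a node into a leaf if either child pins down the sensitive value too tightly."""
        if node is None or (node.left is None and node.right is None):
            return node
        node.left = prune_for_disclosure(node.left, min_variance)
        node.right = prune_for_disclosure(node.right, min_variance)
        for child in (node.left, node.right):
            if child is not None and child.y.var() < min_variance:
                node.left = node.right = None  # prune: node becomes a leaf
                break
        return node

    # Example: the right leaf nearly reveals each individual's value, so the split is removed.
    root = Node(y=[10, 11, 30, 31], left=Node(y=[10, 11]), right=Node(y=[30.0, 30.1]))
    prune_for_disclosure(root, min_variance=1.0)
    print(root.left is None and root.right is None)  # True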
Protecting Privacy Against Record Linkage Disclosure: A Bounded Swapping Approach for Numeric Data. (Information Systems Research, 2011)
Authors: Li, Xiao-Bai; Sarkar, Sumit
Abstract:
    Record linkage techniques have been widely used in areas such as antiterrorism, crime analysis, epidemiologic research, and database marketing. On the other hand, such techniques are also being increasingly used for identity matching that leads to the disclosure of private information. These techniques can be used to effectively reidentify records even in deidentified data. Consequently, the use of such techniques can lead to individual privacy being severely eroded. Our study addresses this important issue and provides a solution to resolve the conflict between privacy protection and data utility. We propose a data-masking method for protecting private information against record linkage disclosure that preserves the statistical properties of the data for legitimate analysis. Our method recursively partitions a data set into smaller subsets such that data records within each subset are more homogeneous after each partition. The partition is made orthogonal to the maximum variance dimension represented by the first principal component in each partitioned set. The attribute values of a record in a subset are then masked using a double-bounded swapping method. The proposed method, which we call multivariate swapping trees, is nonparametric in nature and does not require any assumptions about statistical distributions of the original data. Experiments conducted on real-world data sets demonstrate that the proposed approach significantly outperforms existing methods in terms of both preventing identity disclosure and preserving data quality.
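
A minimal sketch of recursive partitioning orthogonal to the first principal component with values swapped among records inside each small subset, for illustration only; the min_leaf size and the plain per-attribute permutation used here are assumptions standing in for the paper's double-bounded swapping.

    # Illustrative sketch: split along the direction of maximum variance (first
    # principal component), then shuffle attribute values within each small,
    # homogeneous leaf. Leaf size and the plain shuffle are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def swap_within_leaves(X, min_leaf=8):
        X = np.asarray(X, dtype=float)
        if len(X) <= min_leaf:
            out = X.copy()
            for j in range(X.shape[1]):
                out[:, j] = rng.permutation(out[:, j])  # swap values among similar records
            return out
        centered = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = centered @ vt[0]                 # projection on the first PC
        left = scores <= np.median(scores)        # split orthogonal to the PC
        out = X.copy()
        out[left] = swap_within_leaves(X[left], min_leaf)
        out[~left] = swap_within_leaves(X[~left], min_leaf)
        return out

    # Example: mask a small numeric data set; column means are preserved exactly.
    data = rng.normal(size=(64, 3))
    masked = swap_within_leaves(data)
    print(np.allclose(data.mean(axis=0), masked.mean(axis=0)))  # True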
Privacy Protection in Data Mining: A Perturbation Approach for Categorical Data. (Information Systems Research, 2006)
Authors: Li, Xiao-Bai; Sarkar, Sumit
Abstract:
    To respond to growing concerns about privacy of personal information, organizations that use their customers' records in data-mining activities are forced to take actions to protect the privacy of the individuals involved. A common practice for many organizations today is to remove identity-related attributes from the customer records before releasing them to data miners or analysts. We investigate the effect of this practice and demonstrate that many records in a data set could be uniquely identified even after identity-related attributes are removed. We propose a perturbation method for categorical data that can be used by organizations to prevent or limit disclosure of confidential data for identifiable records when the data are provided to analysts for classification, a common data-mining task. The proposed method attempts to preserve the statistical properties of the data based on privacy protection parameters specified by the organization. We show that the problem can be solved in two phases, with a linear programming formulation in Phase I (to preserve the first-order marginal distribution), followed by a simple Bayes-based swapping procedure in Phase II (to preserve the joint distribution). Experiments conducted on several real-world data sets demonstrate the effectiveness of the proposed method.
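
A toy sketch of marginal-preserving swapping for a categorical attribute, for illustration only: it shuffles the confidential value within class-label groups rather than solving the paper's Phase I linear program or applying its Bayes-based Phase II procedure, and the swap_within_class helper and field names are hypothetical.

    # Illustrative sketch: shuffle the confidential categorical value within each
    # class group. This preserves both the overall marginal counts and the
    # class-conditional counts exactly while breaking identity-value links.
    # It is a toy stand-in, not the paper's two-phase LP + Bayes method.
    from collections import Counter
    import random

    random.seed(0)

    def swap_within_class(records, confidential_key, class_key):
        """records: list of dicts. Returns perturbed copies of the records."""
        groups = {}
        for i, r in enumerate(records):
            groups.setdefault(r[class_key], []).append(i)
        out = [dict(r) for r in records]
        for idx in groups.values():
            values = [records[i][confidential_key] for i in idx]
            random.shuffle(values)
            for i, v in zip(idx, values):
                out[i][confidential_key] = v
        return out

    data = [
        {"id": 1, "diagnosis": "A", "class": "high"},
        {"id": 2, "diagnosis": "B", "class": "high"},
        {"id": 3, "diagnosis": "A", "class": "low"},
        {"id": 4, "diagnosis": "C", "class": "low"},
    ]
    masked = swap_within_class(data, "diagnosis", "class")
    # Marginal counts of the confidential attribute are unchanged.
    print(Counter(r["diagnosis"] for r in data) == Counter(r["diagnosis"] for r in masked))  # True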